The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
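The patch-based workaround that the survey identifies as the most common strategy for oversized images can be sketched as follows. The `extract_patches` helper and all sizes here are illustrative assumptions, not details taken from the survey:

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Slide a window over a 2D image and return all full patches.

    Patch-based training feeds these crops to the model instead of
    the full image, sidestepping memory limits on large samples.
    """
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# A 512x512 "image" cut into non-overlapping 128x128 patches: 16 crops.
image = np.zeros((512, 512), dtype=np.float32)
patches = extract_patches(image, patch_size=128, stride=128)
print(patches.shape)  # (16, 128, 128)
```

A stride smaller than `patch_size` would yield overlapping patches, which is also common in practice when predictions are later stitched back together.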
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Supply Chain Platforms (SCPs) provide many raw materials to downstream industries. Compared with traditional e-commerce platforms, the data in SCPs is sparser due to limited user interests. To tackle the data-sparsity problem, cross-domain recommendation (CDR) can be applied, which improves the recommendation performance of the target domain with information from the source domain. However, applying CDR to SCPs directly ignores the hierarchical structure of commodities in SCPs, which reduces recommendation performance. To leverage this feature, in this paper we take a catering platform as an example and propose GReS, a graphical cross-domain recommendation model. The model first constructs a tree-shaped graph to represent the hierarchy of the different dish and ingredient nodes, and then applies our proposed Tree2vec method, which combines GCN and BERT models, to embed the graph for recommendation. Experimental results on a commercial dataset show that GReS significantly outperforms state-of-the-art methods in cross-domain recommendation for supply chain platforms.
Significant progress has been witnessed in learning-based multi-view stereo (MVS) under both supervised and unsupervised settings. To combine their respective merits in accuracy and completeness, while reducing the demand for expensive labeled data, this paper explores a novel semi-supervised setting of learning-based MVS, where only a tiny part of the MVS data is attached with dense depth ground truth. However, due to the huge variation of scenes and the flexible settings of views in MVS, the semi-supervised MVS problem (semi-MVS) may break the basic assumption of classic semi-supervised learning, namely that unlabeled data and labeled data share the same label space and data distribution. To handle these issues, we propose a novel semi-supervised MVS framework, namely SE-MVS. For the simple case where the basic assumption holds in the MVS data, consistency regularization encourages the model predictions to be consistent between original samples and randomly augmented samples via a KL-divergence constraint. For the further troublesome case where the basic assumption conflicts with the MVS data, we propose a novel style consistency loss to alleviate the negative effect caused by the distribution gap: the visual style of the unlabeled samples is transferred to the labeled samples to shrink the gap, and the model predictions on the generated samples are further supervised with the labels of the original labeled samples. Experimental results on the DTU, BlendedMVS, GTA-SFM, and Tanks & Temples datasets show the superior performance of the proposed method. With the same settings in the backbone networks, our proposed SE-MVS outperforms its fully supervised and unsupervised baselines.
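The consistency-regularization idea above (a KL-divergence constraint between predictions on an original sample and its augmented counterpart) can be illustrated with a minimal NumPy sketch. The `consistency_loss` helper and the toy class distributions are hypothetical stand-ins for the depth predictions SE-MVS actually regularizes:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) between two discrete probability distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def consistency_loss(pred_original, pred_augmented):
    """Penalize disagreement between predictions on an original
    sample and its randomly augmented counterpart."""
    return kl_divergence(pred_original, pred_augmented)

# Identical predictions incur (near-)zero loss; disagreement is penalized.
p = np.array([0.7, 0.2, 0.1])
assert consistency_loss(p, p) < 1e-6
assert consistency_loss(p, np.array([0.1, 0.2, 0.7])) > 0.5
```

Minimizing this term pushes the model toward augmentation-invariant predictions, which is the standard mechanism for exploiting unlabeled data when labeled and unlabeled samples share a distribution.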
We investigate the explainability of graph neural networks (GNNs) as a step towards elucidating their working mechanisms. While most current methods focus on explaining graph nodes, edges, or features, we argue that message flows, as the inherent functional mechanism of GNNs, are more natural targets for explanation. To this end, we propose a novel method, FlowX, to explain GNNs by identifying important message flows. To quantify the importance of flows, we propose to follow the philosophy of Shapley values from cooperative game theory. To tackle the complexity of computing the marginal contributions of all coalitions, we propose an approximation scheme that computes Shapley-like values as initial assessments for further redistribution training. We then propose a learning algorithm to train the flow scores and improve explainability. Experimental studies on both synthetic and real-world datasets demonstrate that our proposed FlowX leads to improved explainability of GNNs.
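The permutation-sampling approximation of Shapley-like values can be sketched on a toy cooperative game. `shapley_monte_carlo` and `value_fn` below are illustrative assumptions and do not reproduce FlowX's actual flow-scoring scheme:

```python
import random

def shapley_monte_carlo(players, value_fn, n_samples=2000, seed=0):
    """Approximate Shapley values by sampling random player orderings
    and averaging each player's marginal contribution on arrival."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = list(players)
        rng.shuffle(order)
        coalition = set()
        prev = value_fn(coalition)
        for p in order:
            coalition.add(p)
            cur = value_fn(coalition)
            phi[p] += cur - prev
            prev = cur
    return {p: v / n_samples for p, v in phi.items()}

# Toy game: "a" alone is worth 1.0; "a" and "b" together add a 0.5 bonus.
def value_fn(coalition):
    v = 1.0 if "a" in coalition else 0.0
    if {"a", "b"} <= coalition:
        v += 0.5
    return v

phi = shapley_monte_carlo(["a", "b", "c"], value_fn)
# Exact Shapley values: a -> 1.25, b -> 0.25, c -> 0.0
```

The estimates sum exactly to the grand-coalition value (the efficiency axiom), while the per-player estimates converge to the exact Shapley values as `n_samples` grows.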
Many 3D representations (e.g., point clouds) are discrete samples of an underlying continuous 3D surface. This process inevitably introduces sampling variations on the underlying 3D shape. In learning 3D representations, these variations should be ignored, while transferable knowledge of the underlying 3D shape should be captured. This poses a grand challenge to existing representation-learning paradigms. This paper studies autoencoding on point clouds. The standard autoencoding paradigm forces the encoder to capture such sampling variations, since the decoder has to reconstruct the original point cloud with its sampling variations intact. We introduce the Implicit AutoEncoder (IAE), a simple yet effective method that addresses this challenge by replacing the point-cloud decoder with an implicit decoder. The implicit decoder outputs a continuous representation shared among different point-cloud samplings of the same model. Reconstructing under the implicit representation encourages the encoder to discard sampling variations, leaving more capacity for learning useful features. We theoretically justify this claim under a simple linear autoencoder. Moreover, the implicit decoder offers a rich space to design suitable implicit representations for different tasks. We demonstrate the usefulness of IAE on various self-supervised learning tasks for both 3D objects and 3D scenes. Experimental results show that IAE consistently outperforms the state of the art in each task.
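The key property of an implicit decoder, namely that one continuous function is shared by every discrete sampling of a shape, can be illustrated with a hand-written signed-distance function. `implicit_sphere_decoder` is a toy stand-in, not the learned decoder from IAE:

```python
import numpy as np

def implicit_sphere_decoder(query_points, center, radius):
    """A toy 'implicit decoder': maps 3D query points to their signed
    distance from a sphere, a continuous representation independent of
    how the surface was sampled (negative inside, zero on the surface,
    positive outside)."""
    return np.linalg.norm(query_points - center, axis=-1) - radius

# Two different point-cloud samplings of the same sphere share this single
# continuous function, unlike an explicit point-cloud reconstruction target.
center = np.zeros(3)
queries = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
sdf = implicit_sphere_decoder(queries, center, radius=1.0)
print(sdf)  # [-1.  1.  0.]
```

Because the supervision target is this continuous field rather than a specific discrete point set, the encoder has no incentive to memorize which points happened to be sampled.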
Artificial neural networks (ANNs) are typically confined to accomplishing pre-defined tasks by learning a set of static parameters. In contrast, biological neural networks (BNNs) can adapt to various new tasks by continually updating their connection weights based on observations, which is aligned with the paradigm of learning effective learning rules in addition to static parameters, e.g., meta-learning. Among the broad range of biologically inspired learning rules, Hebbian plasticity updates neural network weights using local signals without the guidance of an explicit target function, closely imitating the learning of BNNs. However, typical plastic ANNs that use large-scale meta-parameters violate the nature of the genomics bottleneck and deteriorate generalization capacity. This work proposes a new learning paradigm that decomposes connection-dependent plasticity rules into neuron-dependent rules, thus accommodating O(n^2) learnable parameters with only O(n) meta-parameters. The decomposed plasticity, together with different types of neuromodulation, is applied to a recurrent neural network starting from scratch to adapt to different tasks. Our algorithms are tested in challenging random 2D maze environments, where agents must use their past experiences to improve their performance without any explicit objective function or human intervention, namely learning by interacting. The results show that rules satisfying the genomics bottleneck adapt to out-of-distribution tasks better than previous model-based and plasticity-based meta-learning approaches.
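One way to read the O(n^2)-parameters-from-O(n)-meta-parameters claim is as a rank-one factorization of the per-connection plasticity rates. The sketch below is an illustrative assumption about that factorization; the names and the exact update form are hypothetical, not the paper's precise rule:

```python
import numpy as np

def decomposed_hebbian_update(W, pre, post, a_post, b_pre, lr=0.1):
    """One plasticity step where the per-connection learning rate
    eta_ij = a_post[i] * b_pre[j] is factorized into two per-neuron
    meta-parameter vectors: O(n) meta-parameters steer O(n^2) weights.
    """
    eta = np.outer(a_post, b_pre)   # (n_post, n_pre) rates from O(n) params
    hebb = np.outer(post, pre)      # local Hebbian signal: co-activity
    return W + lr * eta * hebb

rng = np.random.default_rng(0)
n_pre, n_post = 8, 4
W = np.zeros((n_post, n_pre))
a_post = rng.standard_normal(n_post)  # per-neuron meta-parameters (outer loop)
b_pre = rng.standard_normal(n_pre)
pre = rng.standard_normal(n_pre)      # local activity signals (inner loop)
post = rng.standard_normal(n_post)
W = decomposed_hebbian_update(W, pre, post, a_post, b_pre)
print(W.shape)  # (4, 8)
```

Only `a_post` and `b_pre` would be meta-learned across tasks; the weights `W` themselves adapt online through the local rule, consistent with the genomics-bottleneck argument that the inherited specification should be far smaller than the adapted network.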
In most cases, the training process of Generative Adversarial Networks (GANs) applies uniform or Gaussian sampling in the latent space, which likely spends most of the computation on examples that the model can already handle properly and that are easy to generate. Theoretically, importance sampling speeds up stochastic optimization in supervised learning by prioritizing training examples. In this paper, we explore the possibility of adapting importance sampling to adversarial learning. We use importance sampling to replace uniform and Gaussian sampling in the latent space and employ a normalizing flow to approximate the latent-space posterior distribution by density estimation. Empirically, results on MNIST and Fashion-MNIST demonstrate that our method significantly accelerates GAN optimization while retaining visual fidelity in the generated samples.
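The prioritization step can be sketched as resampling latent codes in proportion to a per-sample difficulty score. This is a minimal sketch under the assumption that such scores are already available; the paper's normalizing-flow density estimator, which would produce them, is omitted here:

```python
import numpy as np

def importance_resample(latents, scores, n_draw, seed=0):
    """Resample latent codes with probability proportional to a
    per-sample difficulty score, so hard examples are prioritized
    over those the generator already handles well."""
    rng = np.random.default_rng(seed)
    probs = scores / scores.sum()
    idx = rng.choice(len(latents), size=n_draw, replace=True, p=probs)
    return latents[idx], probs[idx]

rng = np.random.default_rng(1)
latents = rng.standard_normal((1000, 64))   # standard Gaussian draws
scores = np.exp(rng.standard_normal(1000))  # stand-in difficulty scores
batch, weights = importance_resample(latents, scores, n_draw=32)
print(batch.shape)  # (32, 64)
```

In a full importance-sampling scheme the returned probabilities would also be used to reweight the loss, keeping the gradient estimate unbiased despite the non-uniform draw.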
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Recognizing MEs automatically (MER) is therefore becoming increasingly crucial in the field of affective computing, providing essential technical support for lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Despite recent efforts to alleviate this problem with several spontaneous ME datasets, the amount of available data remains tiny. To address this ME data hunger, we construct a dynamic spontaneous ME dataset with the largest ME data scale to date, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature-learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class-imbalance and key-frame sequence-sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
Face Anti-spoofing (FAS) is essential to secure face recognition systems against various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while lacking consideration of long-distance scenes (i.e., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In these scenes, low image resolution and noise interference are new challenges for surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) an Image Quality Variable module (IQV) is introduced to recover discrimination-related image information by incorporating a super-resolution network; (2) generated sample pairs are used to simulate quality-variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation; and (3) a Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL network.